Currently, most social robots interact with their surroundings and with humans through sensors that are integral parts of the robot, which limits the usability of the sensors, human-robot interaction, and interchangeability. A wearable sensor garment that fits many robots is needed in many applications. This paper presents an affordable wearable sensor vest, together with an open-source software architecture with the Internet of Things (IoT), for social humanoid robots. The vest consists of touch, temperature, gesture, distance, and vision sensors and a wireless communication module. The IoT capability allows the robot to interact with humans both locally and through the Internet. The designed architecture works with any social robot that has a general-purpose graphics processing unit (GPGPU), I2C/SPI buses, an Internet connection, and the Robot Operating System (ROS). The modular design of this architecture enables developers to easily add, remove, or update complex behaviors. The proposed software architecture provides IoT technology, a GPGPU node, I2C and SPI bus managers, audio-visual interaction nodes (speech-to-text, text-to-speech, and image understanding), and isolation between the behavior nodes and the other nodes. The proposed IoT solution consists of related nodes in the robot, RESTful web services, and user interfaces. We use the HTTP protocol as the means of two-way communication between the social robot and the Internet. Developers can easily edit or add nodes in the C, C++, and Python programming languages. Our architecture can be used to design more complex behaviors for social humanoid robots.
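To give a rough sense of how a single sensor node might plug into this kind of ROS/I2C architecture, the Python sketch below reads a value over I2C and publishes it for other nodes to consume. The topic name, I2C address, register, and scaling are hypothetical and not taken from the paper.

```python
# Minimal sketch of an I2C sensor node in a ROS architecture (hypothetical
# topic, address, and register; assumes rospy and smbus2 are installed).
import rospy
from std_msgs.msg import Float32
from smbus2 import SMBus

I2C_BUS = 1           # hypothetical bus number
SENSOR_ADDR = 0x48    # hypothetical device address
TEMP_REGISTER = 0x00  # hypothetical register

def read_temperature(bus):
    """Read one raw byte and convert it to degrees Celsius (placeholder scaling)."""
    raw = bus.read_byte_data(SENSOR_ADDR, TEMP_REGISTER)
    return float(raw)  # a real sensor needs its datasheet conversion

def main():
    rospy.init_node("temperature_sensor_node")
    pub = rospy.Publisher("/vest/temperature", Float32, queue_size=10)
    rate = rospy.Rate(1)  # publish once per second
    with SMBus(I2C_BUS) as bus:
        while not rospy.is_shutdown():
            pub.publish(Float32(data=read_temperature(bus)))
            rate.sleep()

if __name__ == "__main__":
    main()
```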
Machine learning models built on sensitive data hold real-world promise, from medical screening and disease outbreak prediction to agriculture, industry, defense science, and beyond. In many applications, learning participants would benefit from pooling their private datasets, training accurate machine learning models on the combined data, and sharing the benefits of using these models. Because of existing privacy and security concerns, most parties avoid sharing sensitive data for training. Federated learning allows the parties to jointly train a machine learning algorithm on their combined data without any user revealing their local data to a central server. This collaborative privacy-preserving learning approach, however, entails significant communication during training. Most large-scale machine learning applications require decentralized learning over datasets generated on a variety of devices and in different locations. Such datasets are a fundamental obstacle to decentralized learning, because their varied contexts lead to significant differences in data distribution across devices and locations. Researchers have proposed several ways to achieve data privacy in federated learning systems, but challenges posed by non-homogeneous local data remain. The approach of this study is to select the nodes (users) that share their data in federated learning so as to improve accuracy, reduce training time, and increase convergence on the basis of balanced, independently distributed data. To this end, this study introduces DQRE-SCNet, a deep Q-reinforcement learning ensemble combined with spectral clustering, to select a subset of devices in each communication round. The results show that the approach can reduce the number of communication rounds required in federated learning.
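A highly simplified Python sketch of the device-selection idea is given below, assuming numpy and scikit-learn: clients are spectrally clustered by a summary of their label distributions, and one client per cluster is chosen for the round. The deep Q-reinforcement learning ensemble that DQRE-SCNet actually uses for the per-cluster choice is replaced here by a random pick, purely for illustration, and the client statistics are synthetic.

```python
# Sketch: cluster clients by label distribution, then pick one per cluster.
# The paper's deep Q-reinforcement learning ensemble is replaced by a random
# per-cluster choice; the client statistics are synthetic.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
num_clients, num_classes, num_clusters = 30, 10, 5

# Each row: a client's normalized label histogram (stand-in for real data).
label_dist = rng.dirichlet(np.ones(num_classes), size=num_clients)

clustering = SpectralClustering(n_clusters=num_clusters,
                                affinity="nearest_neighbors",
                                random_state=0)
cluster_ids = clustering.fit_predict(label_dist)

# Select one client per cluster for this communication round.
selected = [int(rng.choice(np.where(cluster_ids == c)[0]))
            for c in range(num_clusters)]
print("clients selected for this round:", selected)
```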
In open-world learning, an agent starts with a set of known classes, detects and manages things it does not know, and learns them over time from a non-stationary data stream. Open-world learning differs from numerous other learning problems, and this paper briefly outlines the key differences between it and problems such as incremental learning, generalized novelty discovery, and generalized zero-shot learning. The paper formalizes various open-world learning problems, including open-world learning without labels. These open-world problems can be addressed through modifications of known elements. We propose a new framework that enables an agent to combine various modules for novelty detection, novelty characterization, incremental learning, and instance management to learn new classes from an unlabeled data stream in an unsupervised manner, investigate how several state-of-the-art techniques can be adapted to fit the framework, and use them to define seven baselines for performance on the open-world learning without labels problem. We then discuss open-world learning quality and analyze how instance management can be improved. We also discuss some general ambiguity problems that arise in open-world learning without labels.
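To make the module composition concrete, here is a toy, self-contained Python sketch of an agent that combines a distance-threshold novelty detector, a trivial novelty characterizer, a nearest-centroid incremental learner, and a list-based instance manager over an unlabeled stream. The interfaces, thresholds, and rules are illustrative only and are not taken from the paper.

```python
# Toy sketch of combining novelty detection, novelty characterization,
# incremental learning, and instance management; all rules are illustrative.
import numpy as np

class OpenWorldAgent:
    def __init__(self, threshold=2.0):
        self.centroids = {}   # incremental learner state: class name -> centroid
        self.buffer = []      # instance manager: buffered novel instances
        self.threshold = threshold

    def _nearest(self, x):
        if not self.centroids:
            return None, np.inf
        name, c = min(self.centroids.items(),
                      key=lambda kv: np.linalg.norm(x - kv[1]))
        return name, np.linalg.norm(x - c)

    def process(self, x):
        name, dist = self._nearest(x)
        if dist < self.threshold:          # novelty detection: looks like a known class
            return name
        self.buffer.append(x)              # instance management
        if len(self.buffer) >= 5:          # novelty characterization (toy rule)
            new_name = f"class_{len(self.centroids)}"
            self.centroids[new_name] = np.mean(self.buffer, axis=0)  # incremental learning
            self.buffer = []
        return None                        # instance judged novel, no prediction

agent = OpenWorldAgent()
stream = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 10])
predictions = [agent.process(x) for x in stream]
```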
Code generation from text requires understanding the user's intent from a natural language description (NLD) and generating an executable program code snippet that satisfies this intent. While recent pretrained language models (PLMs) demonstrate remarkable performance for this task, these models fail when the given NLD is ambiguous due to the lack of sufficient specifications for generating a high-quality code snippet. In this work, we introduce a novel and more realistic setup for this task. We hypothesize that ambiguities in the specifications of an NLD are resolved by asking clarification questions (CQs). Therefore, we collect and introduce a new dataset named CodeClarQA containing NLD-Code pairs with the created clarification questions and answers (CQAs). We evaluate the performance of PLMs for code generation on our dataset. The empirical results support our hypothesis that clarifications result in more precise generated code, as shown by an improvement of 17.52 in BLEU, 12.72 in CodeBLEU, and 7.7% in exact match. Alongside this, our task and dataset introduce new challenges to the community, including when and what CQs should be asked.
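For reference, the exact-match and BLEU metrics reported above can be computed as in the sketch below, assuming the sacrebleu package; CodeBLEU requires its own tooling and is omitted, and the example strings are invented.

```python
# Sketch of the exact-match and BLEU metrics mentioned above, using sacrebleu.
# Example predictions/references are invented; CodeBLEU needs separate tooling.
import sacrebleu

predictions = ["def add(a, b):\n    return a + b"]
references = ["def add(a, b):\n    return a + b"]

exact_match = sum(p == r for p, r in zip(predictions, references)) / len(predictions)
bleu = sacrebleu.corpus_bleu(predictions, [references]).score

print(f"exact match: {exact_match:.2%}, BLEU: {bleu:.2f}")
```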
In data-driven systems, data exploration is imperative for making real-time decisions. However, big data is stored in massive databases that are difficult to retrieve data from. Approximate Query Processing (AQP) is a technique for providing approximate answers to aggregate queries based on a summary of the data (a synopsis) that closely replicates the behavior of the actual data; it is useful when an approximate answer to a query, obtained in a fraction of the real execution time, is acceptable. In this paper, we discuss the use of Generative Adversarial Networks (GANs) for generating tabular data that can be employed in AQP for synopsis construction. We first discuss the challenges associated with constructing synopses in relational databases and then introduce solutions to those challenges. Following that, we organize statistical metrics to evaluate the quality of the generated synopses. We conclude that tabular data complexity makes it difficult for algorithms to understand relational database semantics during training, and that improved versions of tabular GANs are capable of constructing synopses that can revolutionize data-driven decision-making systems.
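A minimal sketch of the synopsis idea, assuming the ctgan package: train a tabular GAN on the relation, sample a much smaller synthetic table, and answer an aggregate query on the synopsis instead of the full data. The table, columns, and query are invented examples.

```python
# Sketch: use a tabular GAN synopsis to answer an aggregate query approximately.
# Assumes the ctgan package; the table, columns, and query are invented.
import numpy as np
import pandas as pd
from ctgan import CTGAN

rng = np.random.default_rng(0)
table = pd.DataFrame({
    "region": rng.choice(["north", "south"], size=20_000),
    "sales": rng.gamma(shape=2.0, scale=50.0, size=20_000),
})

gan = CTGAN(epochs=10)
gan.fit(table, discrete_columns=["region"])
synopsis = gan.sample(1_000)   # much smaller than the base relation

# Approximate vs. exact answer for: SELECT AVG(sales) WHERE region = 'north'
approx = synopsis.loc[synopsis.region == "north", "sales"].mean()
exact = table.loc[table.region == "north", "sales"].mean()
print(f"approximate: {approx:.2f}, exact: {exact:.2f}")
```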
Hawkes processes have recently risen to the forefront of tools for modeling and generating sequential event data. Multidimensional Hawkes processes model both the self- and cross-excitation between different types of events and have been applied successfully in various domains such as finance, epidemiology, and personalized recommendations, among others. In this work we present an adaptation of the Frank-Wolfe algorithm for learning multidimensional Hawkes processes. Experimental results show that our approach achieves parameter-estimation accuracy that is better than or on par with other first-order methods, while enjoying a significantly faster runtime.
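For readers unfamiliar with the model, a D-dimensional Hawkes process with the commonly used exponential kernel has the conditional intensity below; the paper's exact parameterization may differ.

```latex
\lambda_i(t) = \mu_i + \sum_{j=1}^{D} \sum_{t_k^{j} < t} \alpha_{ij}\, e^{-\beta_{ij}\,(t - t_k^{j})}, \qquad i = 1, \dots, D,
```

where $\mu_i \ge 0$ is the baseline rate of type-$i$ events, $\alpha_{ij} \ge 0$ measures how strongly past type-$j$ events excite type-$i$ events, and $\beta_{ij} > 0$ is the decay rate; learning amounts to estimating these parameters under nonnegativity constraints, which is where a projection-free method such as Frank-Wolfe fits naturally.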
Graph neural networks have been shown to learn effective node representations, enabling node-, link-, and graph-level inference. Conventional graph networks assume static relations between nodes, while relations between entities in a video often evolve over time, with nodes entering and exiting dynamically. In such temporally-dynamic graphs, a core problem is inferring the future state of spatio-temporal edges, which can constitute multiple types of relations. To address this problem, we propose MTD-GNN, a graph network for predicting temporally-dynamic edges for multiple types of relations. We propose a factorized spatio-temporal graph attention layer to learn dynamic node representations and present a multi-task edge prediction loss that models multiple relations simultaneously. The proposed architecture operates on top of scene graphs that we obtain from videos through object detection and spatio-temporal linking. Experimental evaluations on ActionGenome and CLEVRER show that modeling multiple relations in our temporally-dynamic graph network can be mutually beneficial, outperforming existing static and spatio-temporal graph neural networks, as well as state-of-the-art predicate classification methods.
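In generic form, a multi-task edge prediction objective of this kind sums a per-relation loss over the $R$ relation types; the paper's exact loss and weighting may differ.

```latex
\mathcal{L} = \sum_{r=1}^{R} w_r \, \mathcal{L}_r\!\left(\hat{y}^{(r)}_{uv,\,t+1},\; y^{(r)}_{uv,\,t+1}\right),
```

where $\hat{y}^{(r)}_{uv,t+1}$ is the predicted state of the type-$r$ edge between nodes $u$ and $v$ at the next time step, $\mathcal{L}_r$ is a per-relation classification loss, and $w_r$ weights the tasks; sharing node representations across the $R$ tasks is what lets the relations benefit one another.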
The Longest Common Subsequence (LCS) problem asks for the longest subsequence that is common to every string in a given set. The LCS has applications in computational biology and text editing, among many others. Due to the NP-hardness of the general longest common subsequence problem, numerous heuristic algorithms and solvers have been proposed to give the best possible solution for different sets of strings, but none of them performs best on all types of sets. In addition, there is no method for determining the type of a given set of strings. Besides that, the available hyper-heuristic is not efficient and fast enough to solve this problem in real-world applications. This paper proposes a novel hyper-heuristic for the longest common subsequence problem that uses a novel criterion to classify a set of strings based on their similarity. To do this, we offer a general stochastic framework to identify the type of a given set of strings. Following that, we introduce the set similarity dichotomizer ($S^2D$) algorithm, based on this framework, which divides sets into two types. This algorithm is introduced for the first time in this paper and opens a new way to go beyond the current LCS solvers. We then present a novel hyper-heuristic that exploits $S^2D$ and one of the internal properties of the set to choose the best matching heuristic from a set of heuristics. We compare our results on benchmark datasets with those of the best heuristics and hyper-heuristics. The results show the higher performance of our proposed hyper-heuristic in both solution quality and run time.
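The dispatch structure can be illustrated with the Python sketch below: estimate how similar the strings in the set are (here via average normalized pairwise LCS, a stand-in for the $S^2D$ criterion), dichotomize on a threshold, and hand the set to one of two candidate heuristics. The threshold and the heuristics themselves are placeholders, not the paper's.

```python
# Sketch of a hyper-heuristic dispatcher for the set-of-strings LCS problem.
# The similarity criterion, threshold, and candidate heuristics are placeholders
# standing in for the S^2D dichotomizer and the paper's heuristic pool.
from itertools import combinations

def lcs_len(a, b):
    """Classic O(|a||b|) dynamic program for the pairwise LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if ca == cb else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def set_similarity(strings):
    """Average pairwise LCS length, normalized by the shorter string."""
    pairs = list(combinations(strings, 2))
    return sum(lcs_len(a, b) / min(len(a), len(b)) for a, b in pairs) / len(pairs)

def heuristic_for_similar_sets(strings):     # placeholder heuristic A
    return "solution from heuristic A"

def heuristic_for_dissimilar_sets(strings):  # placeholder heuristic B
    return "solution from heuristic B"

def hyper_heuristic(strings, threshold=0.5):
    if set_similarity(strings) >= threshold:
        return heuristic_for_similar_sets(strings)
    return heuristic_for_dissimilar_sets(strings)

print(hyper_heuristic(["ACGTACGT", "ACGGTCA", "ATCGTAC"]))
```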
Continuous behavioural authentication methods add a unique layer of security by allowing individuals to verify their identity continuously while accessing a device. Maintaining session authenticity is now feasible by monitoring users' behaviour while they interact with a mobile or Internet of Things (IoT) device, making credential theft and session hijacking ineffective. Such a technique is made possible by integrating the power of artificial intelligence and Machine Learning (ML). Most of the literature focuses on training machine learning models for a user by transmitting their data to an external server, which exposes private user data to threats. In this paper, we propose a novel Federated Learning (FL) approach that protects the anonymity of user data and maintains its security. We present a warmup approach that provides a significant accuracy increase. In addition, we leverage a transfer learning technique based on feature extraction to boost the models' performance. Our extensive experiments on four datasets, MNIST, FEMNIST, CIFAR-10 and UMDAA-02-FD, show a significant increase in user authentication accuracy while maintaining user privacy and data security.
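As background for the FL setup described above, the server-side aggregation step of standard federated averaging can be sketched with numpy as below; the warmup and feature-extraction transfer learning that the paper adds are not shown, and the weighting is the usual example-count weighting.

```python
# Sketch of the FedAvg aggregation step underlying this kind of FL setup;
# the paper's warmup and transfer-learning additions are not shown.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client parameter lists (one array per layer)."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Toy example: three clients, each with a model of two parameter tensors.
clients = [[np.ones((2, 2)) * k, np.ones(3) * k] for k in (1.0, 2.0, 3.0)]
sizes = [100, 200, 700]
global_model = federated_average(clients, sizes)
print(global_model[0])  # weighted toward the larger clients
```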
Current pre-trained language models rely on large datasets for achieving state-of-the-art performance. However, past research has shown that not all examples in a dataset are equally important during training. In fact, it is sometimes possible to prune a considerable fraction of the training set while maintaining the test performance. Established on standard vision benchmarks, two gradient-based scoring metrics for finding important examples are GraNd and its estimated version, EL2N. In this work, we employ these two metrics for the first time in NLP. We demonstrate that these metrics need to be computed after at least one epoch of fine-tuning and they are not reliable in early steps. Furthermore, we show that by pruning a small portion of the examples with the highest GraNd/EL2N scores, we can not only preserve the test accuracy, but also surpass it. This paper details adjustments and implementation choices which enable GraNd and EL2N to be applied to NLP.
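For concreteness, the EL2N score of an example is the L2 norm of the difference between the model's predicted class probabilities and the one-hot label (GraNd instead uses the expected gradient norm); the numpy sketch below scores synthetic predictions and prunes the highest-scoring fraction.

```python
# Sketch of EL2N-based pruning: score = ||softmax output - one-hot label||_2,
# then drop the fraction of examples with the highest scores. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
num_examples, num_classes, prune_fraction = 1000, 5, 0.1

probs = rng.dirichlet(np.ones(num_classes), size=num_examples)  # model outputs
labels = rng.integers(num_classes, size=num_examples)
one_hot = np.eye(num_classes)[labels]

el2n = np.linalg.norm(probs - one_hot, axis=1)

cutoff = np.quantile(el2n, 1.0 - prune_fraction)
kept = np.where(el2n <= cutoff)[0]   # discard the highest-scoring examples
print(f"kept {len(kept)} of {num_examples} examples")
```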